3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the objects and object parts within a 3D scene, producing one semantic label per point (or mesh element). The resulting labeled scenes support applications such as robotics, autonomous driving, and augmented reality.
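The per-point labeling described above can be sketched with a toy example. The heuristic below (a simple height threshold separating "ground" from "object" points) and the label ids are illustrative assumptions only; real systems use learned models such as PointNet++ or sparse 3D CNNs, and datasets like SemanticKITTI or ScanNet define their own label sets.

```python
import numpy as np

# Toy point cloud: N points with (x, y, z) coordinates.
# In practice these come from LiDAR scans or depth sensors.
rng = np.random.default_rng(0)
points = rng.uniform(low=[0.0, 0.0, 0.0], high=[10.0, 10.0, 3.0], size=(1000, 3))

# Hypothetical label ids for this sketch.
GROUND, OBJECT = 0, 1

def segment_by_height(pts: np.ndarray, ground_z: float = 0.2) -> np.ndarray:
    """Assign one semantic label per point.

    Deliberately trivial rule: points near z = 0 are labeled GROUND,
    everything else OBJECT. A learned model would replace this rule,
    but the output contract is the same: one label per input point.
    """
    return np.where(pts[:, 2] <= ground_z, GROUND, OBJECT)

labels = segment_by_height(points)
assert labels.shape == (points.shape[0],)  # one label per point
```

Whatever the model, the output format is this dense per-point label array, which downstream tasks (mapping, planning, scene graphs) consume directly.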

TSDASeg: A Two-Stage Model with Direct Alignment for Interactive Point Cloud Segmentation

Jun 26, 2025

PanSt3R: Multi-view Consistent Panoptic Segmentation

Jun 26, 2025

A Survey of Multi-sensor Fusion Perception for Embodied AI: Background, Methods, Challenges and Prospects

Jun 24, 2025

SAM4D: Segment Anything in Camera and LiDAR Streams

Jun 26, 2025

ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation

Jun 24, 2025

AnchorDP3: 3D Affordance Guided Sparse Diffusion Policy for Robotic Manipulation

Jun 24, 2025

FreeQ-Graph: Free-form Querying with Semantic Consistent Scene Graph for 3D Scene Understanding

Jun 16, 2025

TACS-Graphs: Traversability-Aware Consistent Scene Graphs for Ground Robot Indoor Localization and Mapping

Jun 17, 2025

LogoSP: Local-global Grouping of Superpoints for Unsupervised Semantic Segmentation of 3D Point Clouds

Jun 09, 2025

VisLanding: Monocular 3D Perception for UAV Safe Landing via Depth-Normal Synergy

Jun 17, 2025